Experiences and Lessons Learnt - Ethical and Privacy Enforcement for Future Development


Throughout the development of the different tools and the platform, IMPETUS has carefully considered the issues raised by the various ethical, legal and data privacy assessments.

This section:

  1. provides detailed steps for carrying out an effective DPIA.

  2. shares experiences from IMPETUS, identifying different measures to deal with the workers' monitoring concerns raised during the PIA.

  3. provides guidance and recommendations for the analysis of Big Data gathered in the context of a smart city.

  4. presents considerations to follow when dealing with social media analysis tools.

These recommendations are intended as possible measures that may be adopted by organisations implementing, or wishing to implement, IMPETUS tools or similar technologies.


How to conduct an effective DPIA?

For the DPIA, the Data controller (i.e. the public entity adopting the tool) is asked to define the use case scenario precisely and to describe the activities that involve the processing of personal data. It should be clear from the outset which categories of personal data are processed, for how long and where they will be stored, and who the data subjects are.

Carrying out a DPIA is the responsibility of the Data controller, which shall seek the advice of the data protection officer, where designated, and of the Data processors. With this in mind, the security measures applied to the IMPETUS tools were checked and mapped before the IMPETUS Live Exercises (“LEX”). The security measures were identified and described in accordance with the list indicated in the “Handbook on security of personal data processing” prepared by ENISA. The privacy assessment should therefore follow the steps below:


  • The Data Protection Officer (DPO) of the Data controller starts the DPIA by carrying out the “impact evaluation”, i.e. the evaluation of the impact on the fundamental rights and freedoms of individuals that would result from a possible loss of security of the personal data.

  • The next step is the “threat analysis”, carried out jointly by technicians and DPOs. A threat is any circumstance or event with the potential to adversely affect the security of personal data. The goal for the Data controller is to understand the threats related to the overall environment of the personal data processing (external or internal) and to assess their likelihood (threat occurrence probability).

  • Once the impact of the personal data processing operation and the relevant threat occurrence probability have been evaluated, the risk can be assessed. In accordance with the resulting level of risk, the DPO will be able to identify the technological and organisational security measures which are necessary and sufficient to reduce the risk to an acceptable level (a minimal illustration of this step is sketched at the end of this subsection).

  • Organisational security measures should be adopted by the Data controller, while the technological security measures combine those already provided by the tool developer with the ones existing on the Data controller’s premises and infrastructure.

  • If the DPO establishes that the implemented security measures are not adequate, the tool provider should consider adopting additional measures, where possible, in order to mitigate the risk to an acceptable level.

  • All the different phases of the DPIA shall be reported in a dedicated document.

  • Together with the impact assessment, it is recommended to consider other kinds of legal provisions which are closely related to privacy or ethics. Nevertheless, it is important to remember that not everything the law fails to forbid is therefore acceptable. There are already many solutions for physical and cyber security on the market, some of which have already been adopted by various (especially private) entities and organisations, and which are subject to only little regulation or oversight.

As has been observed, the use of advanced technologies, for example in video surveillance, is often dictated by a client’s budget rather than by consideration of their impact on human rights or similar concerns. Similarly, limits on the use of intrusive technologies are often set by market forces rather than by regulation. Every day, new technology capabilities and offerings are added to the marketplace with little consideration of how they will impact (human) security.
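To make the risk-evaluation step above concrete, the following is a minimal sketch of how a DPO might combine impact and threat occurrence probability into a risk level. The ordinal scales, scores and thresholds are illustrative assumptions only, loosely inspired by the ENISA handbook rather than taken from it, and this is not part of any IMPETUS tool.

    # Illustrative only: the scales and the combination rule below are
    # assumptions, not the ENISA methodology itself.
    IMPACT_LEVELS = {"low": 1, "medium": 2, "high": 3, "very high": 4}
    THREAT_PROBABILITY = {"low": 1, "medium": 2, "high": 3}

    def risk_level(impact: str, probability: str) -> str:
        """Combine impact and threat occurrence probability into a risk level."""
        score = IMPACT_LEVELS[impact] * THREAT_PROBABILITY[probability]
        if score <= 2:
            return "low"     # existing measures may already suffice
        if score <= 6:
            return "medium"  # additional organisational/technical measures advisable
        return "high"        # the tool provider should consider further mitigations

    # Example: high impact on data subjects, medium likelihood of occurrence.
    print(risk_level("high", "medium"))  # -> "medium"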

How to deal with workers' monitoring?

There are many tools, like the Workload Monitoring System (“WMS”) tool, whose aim is to monitor the workload and the mental and physical status of workers in order to provide timely feedback and to ensure that operators can perform their tasks without being overloaded or overstressed. Such tools are typically used to prevent situations in which work is impeded, or in which unwanted fatigue and stress reduce the effectiveness of the operators.

These functions are considered important and useful in a working environment. Indeed, in a recent paper, the OECD underlined various positive aspects of this kind of instrument, which makes use of AI algorithms and is referred to as an “emotion AI system”. Such systems can be developed and implemented to detect non-verbal cues, including body language, facial expressions and tone of voice, in order to identify workers who are overworked and those whose mental well-being is at risk.

On the other hand, if these systems are not implemented well, they can threaten the physical safety and well-being of workers, presenting ethical challenges and potential violations of the law. If employees of a security operations centre (“SOC”) are asked to wear devices and sensors, like those which are part of the WMS tool, their physical status, revealing their emotions, would be constantly monitored by the supervisor in charge. In the worst-case scenario, workers could be subject to disciplinary sanctions and could be fired or demoted as a consequence. Moreover, the use of AI systems for systematic management risks reducing the space for workers’ autonomy and agency to the point where workers are deprived of dignity in their work. In its recent paper, the OECD reported, as an example of excessive monitoring, the use in some call centres of devices which give employees feedback on the strength of their emotions to alert them of the need to calm down.

More specifically, the WMS tool is not an instrument for automated decision-making, since every decision based on the outputs of the tool can be taken only by humans. This is compliant with the GDPR, which provides individuals with the right not to be subject to automated decisions that have significant effects. In any case, there remains the risk that workers could be evaluated on the basis of their physical and emotional reactions, which may not reflect their actions. Therefore, privacy and ethical considerations should be a deciding factor in the degree of automation chosen for algorithmic management in the workplace.

Data protection

AI can be used in different situations in the workplace: in the hiring and recruitment process, in assisting or augmenting workers, in assisting management, and in providing human resource services such as training or healthcare plans. In this regard, data protection laws complement, but do not prevail over, employment legislation. These pieces of legislation therefore need to be considered together.

The nature of the personal data collected by the WMS tool, for example, may raise further concerns about possible privacy breaches and violations of human integrity or dignity. This applies especially to wearable devices, which can capture sensitive physiological data on workers’ health conditions, habits and possibly the nature of their social interactions with other people, as reported in the OECD paper. For example, analysis of heart-rate variability provides insights into the emotional and physical endurance of employees; while this information can be collected and used to improve employees’ health and safety, it can also be used by employers, even involuntarily, to inform consequential judgments.


Possible solutions and further considerations:

In considering the question of surveillance of workers, it must always be borne in mind that while workers have a right to a certain degree of privacy in the workplace, this right must be balanced against the right of the employer to control the functioning of the business and to defend itself against workers’ actions likely to harm the employer’s legitimate interests, for example in view of the employer’s liability for the actions of its workers. The need for such a “balancing test” was clarified as early as 2002 by the WP Art. 29 in its “Working document on the surveillance of electronic communications in the workplace”. The functioning of the “business” becomes even more relevant in the context of smart cities which use the tool for SOC operators responsible for the safety and security of the city and of all its inhabitants.

Therefore, before being implemented in the workplace, and even before the due authorisations are requested, any monitoring measure must pass an assessment. The questions indicated by the WP Art. 29 to summarise the nature of this assessment are the following:

  a) Is the monitoring activity transparent to the workers?

  b) Is it necessary? Could the employer not obtain the same result with traditional methods of supervision?

  c) Is the proposed processing of personal data fair to the workers?

  d) Is it proportionate to the concerns that it targets?

Moreover, it should also be clear that any personal data held or used in the course of workers’ monitoring must be adequate, relevant and not excessive for the purpose for which the monitoring is justified. Any monitoring must be carried out in the least intrusive way possible and must be targeted at the area of risk, taking into account data protection rules (in this regard, see also Articles 7 and 8 of the EU Charter of Fundamental Rights and WP Art. 29, “Opinion 8/2001 on the processing of personal data in the employment context”).

In this regard, it is important to underline that the features extracted from biosignals are buffered for one minute before being deleted, and the workload predictions are stored for five minutes before deletion. It is therefore not possible to build up a “history” of the biosignals collected from a specific employee.
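As an illustration of this kind of privacy-by-design retention, the following is a minimal sketch of a time-bounded buffer that silently discards entries once their retention window expires. The retention windows match the figures above; everything else (names, structure) is an assumption, not the WMS implementation.

    import time
    from collections import deque

    class RetentionBuffer:
        """Keep items only for a fixed retention window (privacy by design)."""

        def __init__(self, retention_seconds: float):
            self.retention = retention_seconds
            self._items = deque()  # (timestamp, value) pairs, oldest first

        def add(self, value):
            self._purge()
            self._items.append((time.monotonic(), value))

        def current(self):
            """Return only the items still inside the retention window."""
            self._purge()
            return [value for _, value in self._items]

        def _purge(self):
            cutoff = time.monotonic() - self.retention
            while self._items and self._items[0][0] < cutoff:
                self._items.popleft()  # expired entries are dropped, not archived

    # Retention windows taken from the text: 60 s for biosignal features,
    # 300 s for workload predictions.
    features = RetentionBuffer(retention_seconds=60)
    predictions = RetentionBuffer(retention_seconds=300)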


Informing individuals of their interactions with AI systems in the workplace is another fundamental element of ensuring transparency in AI system use.

An additional element of accountability lies in auditability. A number of firms are beginning to conduct audits to ensure that algorithms and AI systems are trustworthy. In the workplace, such audits have mostly been concerned with discrimination, or have been conducted in anticipation of regulation. There are, however, a number of prerequisites that AI audits need to satisfy in order to ensure accountability. Furthermore, not all AI systems are effectively auditable, especially if companies do not provide auditors with sufficient access and independence.

Additional cybersecurity measures will also be important in order to ensure the safe adoption of the HCI tool by smart cities, given the nature of the data collected.

Ethics should also be taken into consideration. When AI is used to assist management, as would be the case with the WMS tool, smart cities will need to adopt (or require the adoption of) technical and organisational measures to avoid, in particular:

  • inability to rectify performance decisions;

  • lack of explainability about management decisions;

  • excessive monitoring.

In conclusion, a smart city which wants to adopt the WMS tool must ensure compliance with:

  • data protection regulations at the national and international level;

  • soft law and future legislation on the lawful and trustworthy use of AI, to guarantee respect for principles such as human oversight and transparency, especially as applied to the workplace;

  • labour law, which limits the monitoring of workers and requires that they be informed in advance and that prior agreements be reached with workers’ representatives.


How to deal with Big Data analytics?

Among the IMPETUS tools, the UAD tool is specifically intended for the analysis of big data. During the IMPETUS Live Exercises it processed and analysed traffic data, but the tool could also be adapted to different contexts of analysis and connected to different datasets.

The exact use of the UAD tool during the IMPETUS Live Exercises and its assessment have been described here, but it is worth considering other issues that may arise in different contexts of use, in the light of previous experiences with other technological tools. Big data analytics is the discipline in which raw data is analysed in order to find trends and answer questions. It involves collecting, inspecting, cleaning, summarising and interpreting collections of related information in order to find patterns. The communications, data and datasets involved may be either public or proprietary, provided by clients and organisations; the latter is usually the case for datasets coming from police departments, military defence departments, etc.
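As a purely illustrative example of this collect/clean/summarise/interpret pipeline, the following sketch flags anomalous hours in a traffic-count dataset. It is not the UAD tool’s actual algorithm, which is not described here; the file name, column names and threshold are all assumptions.

    import pandas as pd

    # Illustrative sketch of the collect/clean/summarise/interpret pipeline
    # described above, applied to traffic counts. The file name, the column
    # names ("timestamp", "vehicle_count") and the z-score threshold are
    # assumptions; this is not the UAD tool's actual algorithm.
    df = pd.read_csv("traffic.csv", parse_dates=["timestamp"])

    # Cleaning: drop rows with missing counts and sort chronologically.
    df = df.dropna(subset=["vehicle_count"]).sort_values("timestamp")

    # Summarising: aggregate to an hourly city-wide series.
    hourly = df.set_index("timestamp")["vehicle_count"].resample("1h").sum()

    # Interpreting: flag hours that deviate strongly from the recent norm.
    rolling_mean = hourly.rolling(window=24, min_periods=12).mean()
    rolling_std = hourly.rolling(window=24, min_periods=12).std()
    z_scores = (hourly - rolling_mean) / rolling_std
    anomalies = hourly[z_scores.abs() > 3]  # assumed threshold
    print(anomalies)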

An interesting case study in this regard is the software Palantir, used by many police departments in the United States, which has been the subject of much analysis and criticism, especially since its use spread after 9/11.


What considerations should be fulfilled by social media analysis tools?

When considering the impact of technologies used by public authorities to provide security services, a balance has to be found between the impact on human rights and the harm that could derive from the commission of crimes, massacres and, in general, acts of physical violence. Tools like the Social Media Detection (“SMD”) tool make this balance harder to strike. Indeed, the tool does not contribute to preventing imminent violent crimes, such as a bacteriological attack or a shooting. Instead, the SMD tool allows the collection and analysis of large amounts of data to identify networks of people and the interconnections between messages and actions related to the same topic, location or person. The exact use of the SMD tool during the IMPETUS Live Exercises and its assessment have been described here, but it is worth considering other issues that may arise in different contexts of use, in the light of previous experiences with similar technological tools.

Generally speaking, the SMD tool offers an Open Source Intelligence (OSINT) platform, which allows the collection of publicly available material. The software is able to query multiple online sources of data simultaneously and to aggregate them into a single searchable source which may contain a very large number of records. In particular, social media services grant access to data collected by third-party vendors on a commercial basis.
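The query-and-aggregate pattern just described can be sketched as follows. This is not the SMD tool’s actual interface: the endpoint URLs are hypothetical placeholders and the JSON response format is an assumption.

    import concurrent.futures
    import requests

    # Minimal sketch of the "query multiple sources, aggregate into one
    # searchable collection" pattern described above. The endpoint URLs and
    # the response format are hypothetical; this is not the SMD tool.
    SOURCES = {
        "source_a": "https://example.org/api/search",  # placeholder URL
        "source_b": "https://example.net/v1/posts",    # placeholder URL
    }

    def query_source(name: str, url: str, keyword: str) -> list[dict]:
        """Query one source and tag each record with its origin."""
        resp = requests.get(url, params={"q": keyword}, timeout=10)
        resp.raise_for_status()
        # Assumption: each source returns a JSON list of record objects.
        return [{"source": name, **record} for record in resp.json()]

    def aggregate(keyword: str) -> list[dict]:
        """Query all sources in parallel and merge the results."""
        results = []
        with concurrent.futures.ThreadPoolExecutor() as pool:
            futures = [pool.submit(query_source, name, url, keyword)
                       for name, url in SOURCES.items()]
            for future in concurrent.futures.as_completed(futures):
                results.extend(future.result())
        return results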

It is important to consider and to understand that the volume, nature and range of personal data processed by automated OSINT tools may lead to a more serious violation of fundamental rights than the consultation of data from publicly accessible online information sources, such as publicly accessible social media data or data retrieved using a generic search engine, as has recently been underlined in an official Report.

Moreover, there are already some court decisions which clarify the correct interpretation and the scope of application of OSINT regulations. Indeed, the most famous and relevant case study on this topic can be found outside Europe, in the United States, where military services bypassed judicial supervision by purchasing location information from third-party brokers. Various newspapers reported that US military agencies and the Department of Homeland Security were buying mobile phone location data from third-party brokers to trace the present and past movements of users without judicial supervision. This practice continued even though a 2018 Supreme Court ruling found that the US Constitution’s protections against “unreasonable searches and seizures” required government officials to obtain a judicial warrant in order to acquire the same information directly from phone companies. Because this information is freely available on the market, US military officials maintain that they too should be able to buy it, even though it is used for law enforcement purposes.

There is a fear that software like the SMD tool (if used in a certain way), with its powerful data analytics and sentiment analysis capabilities, may contribute to a “Big Brother” mass surveillance ecosystem. Such an ecosystem can have a chilling effect on freedom of thought, opinion and expression, discouraging free discourse online and offline even in circumstances where expression is not concretely blocked. This may affect not only freedom of expression and freedom of opinion, but also rights such as those related to health and well-being.

Safeguards against such misuse of these technological tools should be provided by the legislator, but also by the practices developed within governmental bodies, which should always keep in mind, and never underestimate, the necessary balance between all relevant human rights.